

Search for: All records

Creators/Authors contains: "Leu, Ming C."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available July 1, 2024
  2. Free, publicly-accessible full text available June 8, 2024
  3. Abstract As artificial intelligence and industrial automation develop, human–robot collaboration (HRC) with advanced interaction capabilities has become an increasingly significant area of research. In this paper, we design and develop a real-time, multi-modal HRC system using speech and gestures. A set of 16 dynamic gestures is designed for communication from a human to an industrial robot. A data set of these dynamic gestures is constructed and will be shared with the community. A convolutional neural network is developed to recognize the dynamic gestures in real time using motion history images and deep learning methods. An improved open-source speech recognizer is used for real-time speech recognition of the human worker. An integration strategy is proposed to fuse the gesture and speech recognition results, and a software interface is designed for system visualization. A multi-threading architecture is constructed to operate multiple tasks simultaneously, including gesture and speech data collection and recognition, data integration, robot control, and software interface operation. The various methods and algorithms are integrated into the HRC system, and a platform is constructed to demonstrate the system's performance. The experimental results validate the feasibility and effectiveness of the proposed algorithms and the HRC system.
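    The motion-history-image step described in this entry can be sketched briefly. Below is a minimal illustration of how an MHI might be computed with OpenCV and NumPy; the threshold and duration values are assumptions for illustration, not the paper's settings.

    import cv2
    import numpy as np

    MHI_DURATION = 1.0     # seconds a motion trace persists (assumed value)
    DIFF_THRESHOLD = 32    # per-pixel change treated as motion (assumed value)

    def update_mhi(mhi, prev_gray, curr_gray, timestamp):
        """Stamp newly moving pixels with the current time and fade old traces."""
        motion_mask = cv2.absdiff(curr_gray, prev_gray) >= DIFF_THRESHOLD
        mhi[motion_mask] = timestamp                # fresh motion gets the newest stamp
        mhi[mhi < timestamp - MHI_DURATION] = 0     # traces older than the window vanish
        return mhi

    def mhi_to_image(mhi, timestamp):
        """Normalize the MHI to a 0-255 grayscale image usable as CNN input."""
        scaled = np.clip((mhi - (timestamp - MHI_DURATION)) / MHI_DURATION, 0, 1)
        return (scaled * 255).astype(np.uint8)

    Recent motion appears bright and older motion dim, so a single grayscale image encodes the temporal evolution of a gesture.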
  4.
    Abstract Pellet-based extrusion deposition of carbon fiber-reinforced composites at high material deposition rates has recently gained much attention due to its applications in large-scale additive manufacturing. The mechanical and physical properties of large-volume components depend largely on their reinforcing fiber length, yet few studies to date have directly compared additively fabricated composites reinforced with different carbon fiber lengths. In this study, a new additive manufacturing (AM) approach to fabricate long fiber-reinforced polymer (LFRP) composites was proposed. A pellet-based extrusion deposition method was implemented, which directly used thermoplastic pellets and continuous fiber tows as feedstock materials. Discontinuous long carbon fibers, with an average fiber length of 20.1 mm, were successfully incorporated into printed LFRP samples. The printed LFRP samples were compared with short fiber-reinforced polymer (SFRP) and continuous fiber-reinforced polymer (CFRP) counterparts through mechanical tests and microstructural analyses. The carbon fiber dispersion, the distributions of fiber length and orientation, and the fiber wetting were studied. As expected, flexural strength increased steadily with fiber length. The carbon fibers were highly oriented along the printing direction, and the discontinuous fiber reinforcement was more uniformly distributed within the printed SFRP and LFRP samples. Due to the shorter impregnation time and lower impregnation rate, the printed CFRP samples showed a lower degree of impregnation and poorer fiber wetting. The feasibility of the proposed AM method was further demonstrated by fabricating large-volume components with complex geometries.
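    To see why the 20.1 mm average fiber length matters, a back-of-the-envelope check against the classical Kelly-Tyson critical fiber length, l_c = sigma_f * d / (2 * tau), is sketched below. The property values are typical textbook figures for carbon fiber in a thermoplastic matrix, not values from the study.

    sigma_f = 4.0e9    # fiber tensile strength, Pa (assumed typical value)
    d = 7.0e-6         # fiber diameter, m (assumed typical value)
    tau = 30.0e6       # fiber-matrix interfacial shear strength, Pa (assumed)

    l_c = sigma_f * d / (2 * tau)    # critical length for full load transfer
    print(f"critical fiber length: {l_c * 1e3:.2f} mm")          # ~0.47 mm
    print(f"reported average / critical: {20.1e-3 / l_c:.0f}x")  # ~43x

    Fibers some forty times the critical length can be loaded close to their full strength, consistent with the reported steady gain in flexural strength over the short-fiber samples.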
  5.
  6. Abstract

    With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) is a new trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on Convolutional Neural Networks (CNN) was developed. Building on that model, this study designs and develops a new real-time HRC system based on a multi-threading method and the CNN, enabling real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling multiple tasks simultaneously. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker's behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification by the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm, which responds in real time. A graphical user interface (GUI) is developed to integrate the proposed HRC system and to visualize the real-time motion history and the classification results of the gesture identification. A series of actual collaboration experiments is carried out between a human worker and a six-degree-of-freedom (6 DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.

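    The multi-threading architecture described in this entry amounts to a producer-consumer split between frame capture and gesture recognition. The sketch below is a minimal Python illustration; the queue size, camera index, and classifier stub are assumptions, not the system's actual implementation.

    import queue
    import threading
    import cv2

    frame_queue = queue.Queue(maxsize=8)   # bounded so capture never outruns recognition

    def capture_worker(camera_index=0):
        """Continuously grab camera frames, dropping the oldest when full."""
        cap = cv2.VideoCapture(camera_index)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            if frame_queue.full():
                try:
                    frame_queue.get_nowait()   # discard the stalest frame
                except queue.Empty:
                    pass
            frame_queue.put(frame)

    def recognition_worker(classify):
        """Consume frames and run the (stubbed) MHI + CNN classifier on them."""
        while True:
            frame = frame_queue.get()
            gesture = classify(frame)          # stand-in for MHI update + CNN inference
            if gesture is not None:
                print("detected:", gesture)    # the real system commands the robot here

    threading.Thread(target=capture_worker, daemon=True).start()
    threading.Thread(target=recognition_worker, args=(lambda f: None,), daemon=True).start()

    Decoupling the two loops this way lets capture run at camera rate while recognition proceeds at its own pace, which is the point of the multi-threaded design.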
  7. Abstract

    Human-robot collaboration (HRC) is a challenging task in modern industry, and gesture communication in HRC has attracted much interest. This paper proposes and demonstrates a dynamic gesture recognition system based on Motion History Images (MHI) and Convolutional Neural Networks (CNN). First, ten dynamic gestures are designed for a human worker to communicate with an industrial robot. Second, the MHI method is adopted to extract gesture features from video clips and generate static images of the dynamic gestures as inputs to the CNN. Finally, a CNN model is constructed for gesture recognition. The experimental results show very promising classification accuracy with this method.

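    A CNN of the kind this entry describes can be sketched compactly. The PyTorch model below classifies single-channel MHI images into ten gesture classes; the layer sizes and the 64x64 input resolution are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class MHIGestureNet(nn.Module):
        """Small CNN mapping a grayscale motion history image to gesture logits."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

        def forward(self, x):              # x: (batch, 1, 64, 64) MHI images
            x = self.features(x)
            return self.classifier(x.flatten(1))

    logits = MHIGestureNet()(torch.randn(4, 1, 64, 64))   # smoke test -> shape (4, 10)

    Because the MHI collapses each video clip into one static image, dynamic gesture recognition reduces to ordinary image classification.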